A representational redescription method using competitive learning
Authors
Abstract
In a previous simulation, a network learned to label a small set of stimuli under three input conditions (visual features, label, label + visual features), classifying them according to colour, function, and object name. Results suggested that, across the different input conditions, the network's internal representations reflect the explicit semantic structure of the stimuli. Evidence for the mediating role of the linguistic label came from cluster analysis: a single object is represented very similarly across the three input conditions, but in the label + features condition its representation varies with the label. One limit of this kind of model is that transferring the acquired knowledge to other tasks would require retraining the network. A second shortcoming is that the model allows only one level of knowledge representation, whereas empirical evidence shows that knowledge must be represented at different levels, ranging from implicit to fully explicit symbolism. A similar view has been put forward by the "representational redescription" hypothesis (Clark & Karmiloff-Smith, 1993). To test how these requirements can be met, we augmented our model by requiring the network to use its already acquired knowledge to extract the semantic structure of the stimulus set for each of the three subtasks. The hidden unit layer was connected to a new module with three clusters of output units; each output cluster had to make explicit the structure of one of the three categorization subtasks. To train this new module, all connection weights were frozen except those from the hidden layer to the new output units, which were trained with the competitive learning algorithm. The results show that the network is able to exploit previously acquired knowledge and to make the semantic structure of the stimuli explicit using its hidden (implicit) representations. This structure corresponds to that obtained by cluster analysis in the previous research.
This method can be considered a first step in testing the representational redescription hypothesis. It will require further exploration and the testing of more complete models. Some related issues are discussed, such as the controversial need for hybrid models.
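The training scheme described in the abstract (a frozen trained network, plus a new output module adapted with competitive learning over the fixed hidden activations) can be sketched as follows. This is a minimal winner-take-all illustration, not the authors' implementation: the array shapes, learning rate, number of epochs, and toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(hidden_reps, n_units, lr=0.1, epochs=50):
    """Winner-take-all competitive learning over frozen hidden activations.

    Only these new output weights are adapted; the hidden representations
    themselves stay fixed, as in the frozen-weight setup described above.
    """
    n_hidden = hidden_reps.shape[1]
    w = rng.normal(scale=0.1, size=(n_units, n_hidden))  # random initial weights
    for _ in range(epochs):
        for x in hidden_reps:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))  # closest unit wins
            w[winner] += lr * (x - w[winner])  # move only the winner toward the input
    return w

# Toy "hidden representations": two well-separated groups of five stimuli each,
# standing in for the structure the trained network's hidden layer encodes.
reps = np.vstack([rng.normal(0.0, 0.05, size=(5, 8)),
                  rng.normal(1.0, 0.05, size=(5, 8))])
weights = competitive_learning(reps, n_units=2)
# Each stimulus is assigned to the output unit whose weight vector is closest,
# making the implicit group structure explicit as cluster labels.
labels = [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in reps]
```

Because only the winning unit's weights move, the new module partitions the fixed hidden space without disturbing the knowledge already stored in the frozen connections.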
Similar resources
Cascade-correlation as a model of representational redescription
How does knowledge come to be manipulable and flexible, and transferable to other tasks? These are issues which remain largely untackled in connectionist cognitive modelling. The Representational Redescription Hypothesis (RRH) (Karmiloff-Smith, 1992b) presents a framework for the emergence of abstract, higher-order knowledge, based on empirical work from developmental psychology. The RRH claims...
Representational Redescription for Sea Slugs
The paper discusses the Representational Redescription Hypothesis of Clark and Karmiloff-Smith [1]. This hypothesis (the 'RRH' for short) has two main parts. Part one suggests that human cognitive development is a process involving the redescription of knowledge at increasing levels of abstraction. Part two is a claim about the true nature of thought. It states that representational redes...
Redescription, Information and Access
Over the past two decades Annette Karmiloff-Smith has developed a theory of cognition that depicts the mind as dynamic and changing rather than static and fixed (e.g., this volume). It maintains that cognitive development, in both adults and children, goes through a cycle of modularisation and explicitation, in which modules form and their contents are then explicitated to become available to o...
Representational redescription: the next challenge?
Drawing inspiration from developmental psychology, it has been suggested to build cognitive architectures that allow robots to progressively acquire abstract representations [4]. Humans don't have a single optimal representation of the problems they solve. They can redescribe the information they have acquired in different formats [5]. This allows them to explore different representations and use...
Representational Distance Learning for Deep Neural Networks
Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance mat...
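The representational distance matrices mentioned in this snippet can be computed directly from layer activations. A minimal sketch using correlation distance; the layer sizes, random activations, and the simple squared-error mismatch are placeholders, not that paper's setup:

```python
import numpy as np

def representational_distance_matrix(acts):
    """RDM as correlation distance (1 - Pearson r) between the activation
    patterns of every pair of stimuli (one stimulus per row)."""
    return 1.0 - np.corrcoef(acts)

rng = np.random.default_rng(1)
student = rng.normal(size=(6, 20))  # hypothetical student-layer activations
teacher = rng.normal(size=(6, 50))  # hypothetical teacher-layer activations

rdm_student = representational_distance_matrix(student)
rdm_teacher = representational_distance_matrix(teacher)
# A student layer can be pushed to match the teacher's representational
# geometry by minimising the mismatch between the two RDMs:
loss = float(np.mean((rdm_student - rdm_teacher) ** 2))
```

Because both RDMs have the same shape regardless of layer width, the comparison works even when student and teacher layers have different numbers of units.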
Journal:
Volume Issue
Pages -
Publication date: 2000